
Behavior Research Methods

Springer Science and Business Media LLC

All preprints, ranked by how well they match Behavior Research Methods' content profile, based on 25 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.

1
Modular Streaming Pipeline of Eye/Head Tracking Data Using Tobii Pro Glasses 3

Rahimi Nasrabadi, H.; Alonso, J.-M.

2022-09-05 animal behavior and cognition 10.1101/2022.09.02.506255 medRxiv
Top 0.1%
22.6%

Head-mounted tools for eye/head tracking are increasingly used for assessment of visual behavior in navigation, sports, sociology, and neuroeconomics. Here we introduce an open-source Python software package (TP3Py) for collection and analysis of portable eye/head tracking signals using Tobii Pro Glasses 3. TP3Py's modular pipeline provides a platform for incorporating user-oriented functionalities and comprehensive data acquisition to accelerate development in behavioral and tracking research. Tobii Pro Glasses 3 is equipped with embedded cameras viewing the visual scene and the eyes, inertial measurement unit (IMU) sensors, and a video-based eye tracker implemented in the accompanying unit. The program establishes a wireless connection to the glasses and, within separate threads, continuously makes the received data available in numerical or string formats for saving, processing, and graphical purposes. Built-in modules present eye, scene, and IMU data to the experimenter, and communication modules send the raw signals to stimulus/task controllers live. Closed-loop experimental designs are limited by the system's 140 ms time delay, but this limitation is compensated for by the portability of the eye/head tracking. An offline data viewer has also been incorporated to allow more time-consuming computations. Lastly, we demonstrate example recordings involving vestibulo-ocular reflexes, saccadic eye movements, optokinetic responses, and vergence eye movements to highlight the program's measurement capabilities across various experimental goals. TP3Py has been tested on Windows with Intel processors and on Ubuntu with Intel or ARM (Raspberry Pi) architectures.
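To make the streaming architecture concrete, here is a minimal Python sketch of a threaded receive loop of the kind the abstract describes; the address, port, and newline-delimited JSON packet format are illustrative assumptions, not TP3Py's actual API.

import json
import queue
import socket
import threading

data_queue = queue.Queue()  # shared buffer between receiver and consumer threads

def stream_worker(host="192.168.75.51", port=8080):
    # Hypothetical endpoint: read newline-delimited JSON gaze/IMU packets
    # over a persistent socket and hand them to saving/plotting threads.
    with socket.create_connection((host, port)) as sock:
        buffer = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                data_queue.put(json.loads(line))

threading.Thread(target=stream_worker, daemon=True).start()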

2
Evaluating the validity of the eye movement event detection model of Ganzin Sol glasses

Chien, S.-E.; Lee, K.; Chang, C.-Y.; Lu, S.-C.; Chien, S.-Y.

2023-12-19 neuroscience 10.1101/2023.12.18.572270 medRxiv
Top 0.1%
18.5%

Eye tracking requires precise measurement of gaze data, and reliably identifying fixations and saccades, the fundamental eye movements, is crucial for eye trackers. The present study evaluated the performance of Ganzin Sol glasses, a wearable eye tracker developed by Ganzin Technology, in detecting eye movement events. Participants performed the fixation and saccade invocation task (Komogortsev et al., 2010) and the gap paradigm (Saslow, 1967) using both Ganzin Sol and Tobii Pro 2 glasses, separately at short (50 cm) and long (300 cm) viewing distances. The fixation and saccade invocation task involved maintaining fixation on a regularly shifting visual target, enabling quantitative and qualitative analysis of participants' oculomotor behaviors. In the gap paradigm, participants executed saccades toward a peripheral target when the fixation point disappeared (gap condition) or remained visible (overlap condition). Typically, saccade latency in the gap condition is shorter (i.e., the gap effect). Results revealed comparable performance between the Ganzin Sol and Tobii Pro 2 glasses in the fixation and saccade invocation task, and the gap effect was observed with both eye trackers. Therefore, the validity of the Ganzin Sol glasses was at least comparable to that of the Tobii Pro 2 glasses in the present study.
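As a minimal illustration of the gap effect the validation relies on, the per-trial latencies below are invented numbers; the effect is simply the latency difference between conditions.

import numpy as np

# Hypothetical per-trial saccade latencies in milliseconds.
gap_latencies = np.array([150, 165, 142, 158, 171, 149])
overlap_latencies = np.array([210, 198, 225, 205, 217, 221])

# Gap effect: latency is shorter when the fixation point disappears first.
gap_effect = np.median(overlap_latencies) - np.median(gap_latencies)
print(f"Gap effect: {gap_effect:.0f} ms")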

3
Attention and Retention Effects of Culturally Targeted Billboard Messages: An Eye-Tracking Study Using Immersive Virtual Reality

Jeon, M.; Lim, S.; Lapinski, M. K.; Bente, G.; Spates, S. A.; Schmaezle, R.

2024-09-16 animal behavior and cognition 10.1101/2024.09.10.610975 medRxiv
Top 0.1%
14.9%

Targeting, the creation of a match between message content and receiver characteristics, is a key strategy in communication message design. Cultural targeting, adapting message characteristics to be congruent with recipients' cultural knowledge, appearance, or beliefs, is used in practice and is a potentially effective strategy to boost the relevance of a message, directing attention to messages and enhancing their effects. However, many open questions remain regarding the mechanisms and consequences of targeting. This is partly due to methodological challenges in experimentally manipulating messages to match recipients' cultural characteristics while simultaneously measuring effects and balancing experimental control and realism. Here, we used a novel VR-based paradigm in which participants drove along a virtual highway flanked by billboards with varying message designs. Specifically, we manipulated the message design to either match or mismatch people's cultures of origin. We used unobtrusive eye tracking to assess participants' attention (i.e., for how long and how often they looked at matched vs. unmatched billboards). Results show a tendency for participants to inspect culturally matched billboards more often and for longer. We further found that matched billboards produce better recall, indicating more efficient encoding and storage of the messages. Our results underscore the effectiveness of cultural targeting and demonstrate how researchers can rigorously manipulate relevant message factors using virtual environments. We discuss the implications of these findings for theories of cultural targeting and methodological perspectives for the objective measurement of exposure factors through eye tracking.

4
Improving the utility and accuracy of wearable light loggers and optical radiation dosimeters through auxiliary data, quality assurance, and quality control

Zauner, J.; Stefani, O.; Abarca, G. B.; Guidolin, C.; Schrader, B.; Udovicic, L.; Spitschan, M.

2025-09-11 neuroscience 10.1101/2025.09.11.675633 medRxiv
Top 0.1%
14.7%

Wearable light loggers and optical radiation dosimeters are increasingly used in chronobiology and circadian health research, yet their data often lack contextual information (e.g., sleep, activity, environmental conditions) and may be compromised by non-wear periods, compliance issues, or technical faults. To address these limitations, we conducted interviews (n=21) and a survey (n=16) with domain experts to distill and iteratively develop auxiliary data and quality-control strategies aimed at improving the accuracy and interpretability of wearable light measurements. From this process, we established a six-domain auxiliary data framework encompassing wear/non-wear logging, sleep monitoring, light-source context, participant behaviour, user experience, and environmental light levels. Survey responses showed strong consensus on the value of auxiliary information (mean importance 4.0/5), with sleep and wear-time tracking rated as the most essential additions. To support practical adoption, we provide implementation tools, including extensions to the open-source R package LightLogR, enabling streamlined integration of wearable and auxiliary data as well as systematic quality assurance and control. Experts agreed that combining contextual records with rigorous QA/QC procedures substantially improves the reliability of field-collected light-exposure data. These recommendations and tools aim to help researchers in chronobiology, wearable sensing, and health sciences maximise data quality and enhance interpretation in real-world light-exposure studies.

5
What is behavior? No seriously, what is it?

Calhoun, A.; El Hady, A.

2021-07-07 animal behavior and cognition 10.1101/2021.07.04.451053 medRxiv
Top 0.1%
14.5%

Studying behavior lies at the heart of many disciplines. Nevertheless, academics rarely provide an explicit definition of what behavior actually is. What range of definitions do people use, and how does that vary across disciplines? To answer these questions, we developed a survey probing what constitutes behavior. We find that academics adopt different definitions of behavior according to their academic discipline, the animal model they work with, and their level of academic seniority. Using hierarchical clustering, we identify at least six distinct types of behavior, used in seven distinct operational archetypes of behavior. Individual respondents have clear, consistent definitions of behavior, but these definitions are not consistent across the population. Our study is a call for academics to clarify what they mean by behavior wherever they study it, in the hope that this will foster interdisciplinary studies and improve our understanding of behavioral phenomena.

6
Why do we need high-fidelity synthetic eye movement data and how should they look like?

Qian, C. S.; Aziz, S.; Hasan, K.; Komogortsev, O. V.

2025-12-15 animal behavior and cognition 10.64898/2025.12.11.692112 medRxiv
Top 0.1%
13.8%

Eye tracking has been a popular behavioral recording method across psychology, neuroscience, and computer science, but a need for large and diverse datasets has emerged. Synthetic eye movement data offer a promising complement, yet it remains unclear which aspects of real oculomotor behavior they must capture. This paper has three objectives: to clarify why synthetic eye movement data are needed, to outline what high-fidelity synthetic signals should look like, and to demonstrate how existing longitudinal datasets and subjective reports can guide their design and validation. We analyzed the motivation for synthetic eye movements and presented a framework of eye movement variance: occasion-specific or state-specific variance, between-individual variance, pipeline-induced variance, and noise. Finally, we analyze subjective reports collected alongside the GazeBase dataset, demonstrating occasion-specific variance in the data and setting requirements for state-free synthetic eye movement signals.

7
EasyEyes - Accurate fixation for online vision testing of crowding and beyond

Kurzawski, J. W.; Pombo, M.; Burchell, A.; Hanning, N. M.; Liao, S.; Majaj, N. J.; Pelli, D.

2023-07-18 neuroscience 10.1101/2023.07.14.549019 medRxiv
Top 0.1%
13.6%

Online methods allow testing of larger, more diverse populations with much less effort than in-lab testing. However, many psychophysical measurements, including visual crowding, require accurate eye fixation, which is classically achieved by testing only experienced observers who have learned to fixate reliably, or by using a gaze tracker to restrict testing to moments when fixation is accurate. Alas, both approaches are impractical online, since online observers tend to be inexperienced and online gaze tracking using the built-in webcam has low precision (±4 deg; Papoutsaki et al., 2016). The EasyEyes open-source software reliably measures peripheral thresholds online, achieving accurate fixation in a novel way without gaze tracking: EasyEyes tells observers to use the cursor to track a moving crosshair. At a random time during successful tracking, a brief target is presented in the periphery, and the observer responds by identifying it. To evaluate EasyEyes' fixation accuracy and thresholds, we tested 12 naive observers in three ways in a counterbalanced order: first, in the lab, using gaze-contingent stimulus presentation (Kurzawski et al., 2023; Pelli et al., 2016); second, in the lab, using EasyEyes while independently monitoring gaze; third, online at home, using EasyEyes. We find that crowding thresholds are consistent (no significant differences in the mean and variance of thresholds across the three ways) and that individual differences are conserved. The small root mean square (RMS) fixation error (0.6 deg) during target presentation eliminates the need for gaze tracking. Thus, EasyEyes enables fixation-dependent measurements online, allowing easy testing of larger and more diverse populations.
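The RMS fixation error reported above can be computed directly from gaze samples and crosshair positions; a small sketch with made-up coordinates (the actual EasyEyes pipeline is its own implementation, not this code):

import numpy as np

# Hypothetical gaze and crosshair positions (deg) during target presentation.
gaze = np.array([[0.1, -0.2], [0.4, 0.3], [-0.5, 0.1], [0.2, 0.0]])
crosshair = np.array([[0.0, 0.0], [0.2, 0.1], [-0.1, 0.0], [0.1, 0.1]])

# RMS fixation error: root mean square of the Euclidean gaze-to-crosshair offsets.
offsets = np.linalg.norm(gaze - crosshair, axis=1)
rms_error = np.sqrt(np.mean(offsets ** 2))
print(f"RMS fixation error: {rms_error:.2f} deg")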

8
Multilevel Modelling of Gaze from Hearing-impaired Listeners following a Realistic Conversation

Shiell, M. M.; Christensen, J. H.; Skoglund, M.; Keidser, G.; Zaar, J.; Rotger-Griful, S.

2022-11-09 neuroscience 10.1101/2022.11.08.515622 medRxiv
Top 0.1%
12.5%

Purpose: There is a need for outcome measures that predict real-world communication abilities in hearing-impaired people. We outline a potential method for this and use it to answer the question of when, and how much, hearing-impaired listeners look towards a new talker in a conversation. Method: Twenty-two older hearing-impaired adults followed a pre-recorded two-person audiovisual conversation in the presence of babble noise. We compared their eye-gaze direction to the conversation in two multilevel logistic regression (MLR) analyses. First, we split the conversation into events classified by the number of active talkers within a turn or a transition, and we tested whether these predicted the listeners' gaze. Second, we mapped the odds that a listener gazed towards a new talker over time during a conversation transition. Results: We found no evidence that our conversation events predicted changes in the listeners' gaze, but the listeners' gaze towards the new talker during a silent transition was predicted by time: the odds of looking at the new talker increased in an s-shaped curve from at least 0.4 seconds before to 1 second after the onset of the new talker's speech. A comparison of models with different random effects indicated that more variance was explained by differences between individual conversation events than by differences between individual listeners. Conclusion: MLR modelling of eye gaze during talker transitions is a promising approach to studying a listener's perception of realistic conversation. Our experience provides insight to guide future research with this method.
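The s-shaped time course can be sketched with a plain logistic fit; the study's actual analyses are multilevel (random effects for events and listeners), which this simplified, synthetic-data example omits.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical samples: time (s) relative to the new talker's speech onset,
# and whether gaze was on the new talker (1) or not (0).
time = rng.uniform(-1.0, 2.0, 500)
p_true = 1.0 / (1.0 + np.exp(-3.0 * (time - 0.2)))
gaze_on_new = rng.binomial(1, p_true)

# Single-level logistic regression of the gaze odds over time.
model = sm.Logit(gaze_on_new, sm.add_constant(time)).fit(disp=0)
print(model.params)  # intercept and slope of the time effect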

9
Invalidity of light sensor data in field studies and a proposal of an algorithmic approach for detection and filtering of non-wear time

Hunt, L. C.; Fritz, J.; Herf, M.; Vetter, C.

2021-08-12 neuroscience 10.1101/2021.08.11.455859 medRxiv
Top 0.1%
12.4%

Wearable light sensors are increasingly used in intervention and population-based studies investigating the consequences of environmental light exposure on human physiology. An important step in such analyses is the reliable detection of non-wear time. We observed in light data that days with less wear time also have lower variability in the light signal, and we sought to test whether the standard deviation of the change between subsequent samples can detect this condition. In this study, we propose and validate an easy-to-implement algorithm designed to discriminate between days with a non-wear time >4 h ("invalid days") vs. ≤4 h ("valid days") and investigate to what extent values of commonly used, physiologically meaningful light variables differ between invalid days, valid days, and algorithm-selected non-wear days. We used 83 days of light data from a field study with high participant compliance, complemented by 47 days of light data where free-living individuals were instructed not to wear the sensor for varying amounts of time. Light data were recorded every two minutes using the pendant-worn f.luxometer light sensor; validity was derived from daily logs in which participants recorded all non-wear time. The algorithm-derived score discriminated well between valid and invalid days (area under the curve (AUC): 0.77, 95% CI: 0.67-0.87). The best cut-off value (i.e., highest Youden index) correctly recognized valid days with a probability of 87% ("sensitivity") and invalid days with a probability of 63% ("specificity"). Values of various light variables derived from algorithm-selected days only (median: 264.3 (Q1: 153.6, Q3: 420.0) for 24-h light intensity (in lux); 496.0 (404.0, 582.0) for time spent above 50 lux) were close to those derived from self-reported valid days only. However, these values did not differ significantly when derived across all days compared to self-reported valid days. Our results suggest that our proposed algorithm discriminates well between valid and invalid days. However, in high-compliance cohorts, distortions in aggregated light measures of individual-level environmental light recordings across days appear to be small, making the application of our algorithm optional, but not necessary.
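The core of the algorithm, scoring each day by the variability of sample-to-sample changes and picking the cut-off with the highest Youden index, can be sketched as follows; the data here are simulated, and the exact preprocessing in the paper may differ.

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Hypothetical days of light data (lux, one sample every two minutes) and
# self-reported labels (1 = invalid day, i.e., >4 h non-wear).
days = [rng.lognormal(4, 2, 720) for _ in range(40)]
invalid = rng.integers(0, 2, 40)

# Score each day by the SD of changes between subsequent samples; low
# variability in the light signal suggests the sensor was not being worn.
scores = np.array([np.std(np.diff(day)) for day in days])

# Lower scores indicate invalid days, so negate the score for the ROC, then
# choose the cut-off with the highest Youden index (sensitivity + specificity - 1).
fpr, tpr, thresholds = roc_curve(invalid, -scores)
best_cutoff = -thresholds[np.argmax(tpr - fpr)]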

10
IntelliR: A comprehensive and standardized pipeline for automated profiling of higher cognition in mice

Daguano Gastaldi, V.; Hindermann, M.; Wilke, J. B.; Ronnenberg, A.; Arinrad, S.; Kraus, S.; Wildenburg, A.-F.; Ntolkeras, A.; Provost, M. J.; Ye, L.; Curto, Y.; Cortes-Silva, J.-A.; Butt, U. J.; Nave, K.-A.; Woznica Miskowiak, K.; Ehrenreich, H.

2024-01-25 animal behavior and cognition 10.1101/2024.01.25.577156 medRxiv
Top 0.1%
10.4%

In the rapidly evolving field of rodent behavior research, observer-independent methods facilitate data collection within a social, stress-reduced, and thus more natural environment. A prevalent system in this research area is the IntelliCage, which empowers experimenters to design individual tasks and higher cognitive challenges for mice, driven by their motivation to access reward. The extensive amount and diversity of data provided by the IntelliCage system explains the growing demand for automated analysis among users. Here, we introduce IntelliR, a standardized pipeline for analyzing raw data generated by the IntelliCage software, together with novel parameters including the cognition index, which enables comparison of performance across various challenges. With IntelliR, we provide the tools to implement and automatically analyze three challenges that we designed, encompassing spatial, episodic-like, and working memory with their respective reversal tests. Using results from three independent control cohorts of adult female wildtype mice, we demonstrate their ability to comprehend and learn the tasks, thereby improving their proficiency over time. To validate the sensitivity of our approach for detecting cognitive impairment, we used adult female NexCreERT2xRosa26-eGFP-DTA mice after tamoxifen-induced, diphtheria toxin-mediated ablation of pyramidal neurons in cortex and hippocampus. We observed deterioration in learning capabilities and cognition index across several tests. IntelliR can be readily integrated into and adapted for individual research, improving time management and the reproducibility of data analysis.

Highlights:
- IntelliR is a standardized pipeline for analyzing raw data from the IntelliCage software.
- Domains include spatial, episodic-like, and working memory with reversals.
- WT mice (3 cohorts) comprehend, learn, and improve proficiency over time.
- The cognition index permits comparison of performance across cognitive domains.
- Mice with ablation of pyramidal neurons decline mainly in working memory.

11
The VR Billboard Paradigm: Using VR and Eye-tracking to Examine the Exposure-Reception-Retention Link in Realistic Communication Environments

Schmaelzle, R.; Lim, S.; Cho, H. J.; Wu, J.; Bente, G.

2023-06-06 animal behavior and cognition 10.1101/2023.06.03.543559 medRxiv
Top 0.1%
10.2%

Exposure is the cornerstone of media and message effects research. If a health, political, or commercial message is not noticed, no effects can ensue. Yet existing research in communication, advertising, and related disciplines often fails to measure exposure and demonstrate the causal link between quantified exposure and outcomes, because actual exposure (i.e., whether recipients were not only exposed to messages but also took notice of them) is difficult to capture. Here, we harness Virtual Reality (VR) technology integrated with eye tracking to overcome this challenge. While eye-tracking technology alone can capture whether people attend to messages in their communication environment, most eye-tracking research is bound to laboratory-based screen-reading paradigms that are not representative of the broader communication environments in which messages are encountered, and emerging eye-tracking field research suffers from an inability to control and experimentally manipulate key variables. Our solution is to measure eye tracking within an immersive VR environment that resembles a realistic message reception context. Specifically, we simulate driving down a highway alongside which billboards are placed and use VR-integrated eye tracking to measure whether drivers look at individual billboard messages. This allows us to rigorously quantify the nexus between exposure and reception, and to link our measures to subsequent memory, i.e., whether messages were remembered, forgotten, or not even encoded. We further demonstrate that manipulating drivers' attention directly impacts gaze behavior and memory. We discuss the large potential of this paradigm for quantifying exposure and message reception in realistic communication environments, and the equally promising applications in new media contexts (e.g., the Metaverse).

12
IntelliProfiler 2.0: An integrated R pipeline for long-term home-cage behavioral profiling in group-housed mice using eeeHive 2D

Ochi, S.; Azuma, M.; Hara, I.; Inada, H.; Takabayashi, K.; Osumi, N.

2026-02-11 animal behavior and cognition 10.64898/2026.02.10.705044 medRxiv
Top 0.1%
10.0%

Background: Long-term home-cage monitoring is essential to quantify spontaneous locomotor and social behaviors in group-housed mice, but analysis of high-density RFID tracking data remains a barrier to reproducibility. New methods: We developed IntelliProfiler 2.0, a fully R-based pipeline tailored to the eeeHive 2D floor-mounted RFID array. The workflow performs data import from text logs, preprocessing, coordinate reconstruction, missing-value handling, feature extraction, statistical testing, and visualization in a single environment. Behavioral metrics include travel distance, close contact ratio (CCR), and a newly implemented inter-individual distance metric. Results: In four-day recordings of group-housed C57BL/6J mice (8 males and 8 females), IntelliProfiler 2.0 captured circadian phase-dependent locomotion and proximity patterns and reproduced sex-dependent differences consistent with prior analyses while incorporating updated hardware specifications. Radar-chart summaries enabled intuitive comparison of multidimensional behavioral profiles and inter-individual variability across light/dark phases. Comparison with existing methods: Compared with IntelliProfiler 1.0 and multi-tool workflows, IntelliProfiler 2.0 consolidates analysis into a single, script-based R pipeline, reducing operational complexity and improving reproducibility. The updated implementation supports recent manufacturer-driven changes, including antenna renumbering and multi-USB data export. Conclusions: IntelliProfiler 2.0 provides a reproducible, extensible framework for high-throughput behavioral phenotyping of group-housed mice and is scalable across hardware configurations, including simplified single-board recordings.

Highlights:
- End-to-end R pipeline for eeeHive 2D floor-based RFID tracking analysis
- Standardized setup with comprehensive manuals and protocols
- Inter-individual distance metric to quantify group spatial structure
- Circadian- and sex-dependent behavioral profiling in group-housed mice
- Radar charts summarize multidimensional behavioral profiles and variability
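For illustration, the two distance-based metrics can be computed from reconstructed coordinates in a few lines; this sketch uses Python and simulated positions, whereas the actual pipeline is R-based.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reconstructed positions: (samples, mice, xy) in cm.
pos = rng.uniform(0, 40, size=(1000, 16, 2))

# Travel distance per mouse: summed Euclidean step lengths over time.
travel = np.linalg.norm(np.diff(pos, axis=0), axis=2).sum(axis=0)

# Inter-individual distance: mean pairwise distance between mice per sample.
diffs = pos[:, :, None, :] - pos[:, None, :, :]
pair_dist = np.linalg.norm(diffs, axis=3)
i, j = np.triu_indices(pos.shape[1], k=1)
mean_iid = pair_dist[:, i, j].mean(axis=1)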

13
A Dynamic Threshold-Based Method for Robust and Accurate Blink Detection in Eye-Tracking Data

Khodami, M. A.

2025-04-23 neuroscience 10.1101/2025.04.21.649751 medRxiv
Top 0.1%
10.0%

Blink detection is a critical component of eye-tracking research, particularly in pupillometry, where data loss due to blinks can obscure meaningful insights. Existing methods often rely on fixed thresholds or device-specific noise profiles, which may lead to inaccuracies in detecting blink onsets and offsets, especially in heterogeneous datasets. This study introduces a novel blink detection model that dynamically adapts to varying pupil size distributions, ensuring robustness across different experimental conditions. The proposed method integrates dynamic thresholding, which adjusts based on the mean pupil size of valid samples; Gaussian smoothing, which reduces noise while preserving signal integrity; and adaptive boundary refinement, which refines blink onsets and offsets based on trends in the smoothed data. Unlike traditional approaches that merge closely spaced blinks, this model treats each blink as an independent event, preserving the temporal resolution that is essential for cognitive and perceptual studies. The model is computationally efficient and adaptable to a wide range of sampling rates, from low-frequency (e.g., 250 Hz) to high-frequency (e.g., 2000 Hz) data, ensuring consistent blink detection across different eye-tracking setups and making it suitable for both real-time and offline applications. Experimental evaluations demonstrate its ability to accurately detect blinks across diverse datasets. By offering a more reliable and generalizable solution, this model advances blink detection methodologies and enhances the quality of eye-tracking data analysis across research domains.
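A compact sketch of the three ingredients (dynamic threshold, Gaussian smoothing, edge-based onset/offset detection) is shown below; the parameter values and the zero-as-invalid convention are assumptions, and the published model's refinement step is more elaborate.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_blinks(pupil, srate=250, k=0.5):
    # Dynamic threshold: a fraction k of the mean pupil size of valid samples
    # (here, the tracker is assumed to report 0 during signal loss).
    valid = pupil > 0
    threshold = k * pupil[valid].mean()
    # Gaussian smoothing reduces noise while preserving the blink shape.
    smoothed = gaussian_filter1d(pupil.astype(float), sigma=srate * 0.01)
    # Onsets/offsets as transitions of the below-threshold mask; each blink
    # is kept as an independent event (no merging of close blinks).
    below = smoothed < threshold
    edges = np.diff(below.astype(int))
    onsets = np.flatnonzero(edges == 1) + 1
    offsets = np.flatnonzero(edges == -1) + 1
    return list(zip(onsets, offsets))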

14
Development of a Semantically Related Emotional and Neutral Stimulus Set

Barnacle, G. E.; Madan, C. R.; Talmi, D.

2021-01-19 neuroscience 10.1101/2021.01.18.424707 medRxiv
Top 0.1%
8.5%

When measuring memory performance for emotional and neutral stimuli, many studies are confounded by not controlling for differential semantic relatedness between stimulus sets. This can lead to misattributing the cause of an emotional enhancement of memory (EEM) effect, because differential semantic relatedness also contributes to the EEM. Participants rated static visual emotional and neutral scenes on measures of arousal, valence, and semantic relatedness. These measures were used to create a novel stimulus set which, in addition to demonstrating significant differences in valence and arousal, also controls for within-set semantic relatedness, thus resolving a crucial issue that has not previously been addressed in the use of visual emotional stimuli. As an added advantage, the stimulus set developed here is controlled for measures of objective visual complexity, also implicated as a confound in the investigation of memory. This article introduces a collection of emotional and neutral colour images that can be organised flexibly according to experimental requirements. These stimuli are made freely available for non-commercial use within the scientific community.

15
A conveyor feeder for animal experiments

Oh, J.

2019-10-13 animal behavior and cognition 10.1101/801993 medRxiv
Top 0.1%
8.3%

Several different types of open-source feeders have been used in animal experiments in cognitive biology, neuroscience, psychology, and related fields. These feeders use either dry pellets, which have a hard surface and a simple shape, or liquid food such as sucrose solution. These food types can be manipulated rather easily thanks to their physical attributes. Although this is beneficial in terms of controllability, animal subjects often lose motivation to interact with operant conditioning devices offering such food items. Using natural food items such as fruits, insects, worms, and pieces of meat helps keep subjects' motivation high; however, those items are not well suited for currently available open-source feeders because of their physical attributes, including complex shapes and sticky, delicate textures. We made a feeder that delivers such natural food items to animal subjects for operant conditioning, using relatively cheap and easily obtainable parts. For validation, we built a full operant conditioning device for wolves and dogs, containing two of these feeders, a pressure-sensitive monitor, and a speaker. (Specifications table omitted.)

16
Changes in Ocular Fixation Characteristics Over Time during Reading

Friedman, L.; Komogortsev, O.

2025-06-11 neuroscience 10.1101/2025.06.09.658738 medRxiv
Top 0.1%
8.0%

In this report, we evaluate eye movements during reading. There is a huge literature on this topic, but our report is not focused on its typical questions, and our task design is very atypical for an eye-movement/reading study. While most reading studies evaluate mental processes during reading, the only mental process we evaluate is fatigue. While most reading studies use stimuli presented in the middle of the screen, our poem sections are presented in 24 lines of text spanning the top to the bottom of the display. While most multi-line reading studies use very large interline spacing, our interline spacing is more typical of text in newspapers, magazines, and books. We report changes in the characteristics of fixations as a function of time-on-task (TOT). We determine the start and end of reading for each subject/session, divide this reading time into 10 periods of equal length (in number of samples), referred to as epochs, and look for changes in fixation characteristics across epochs. We emphasize our results for horizontal position because these changes were monotonic and interpretable. For horizontal position signals, we found that the mean intersample distance within fixations increases, the rate of change of fixation position decreases, and the total fixation width increases as a function of epoch. We interpret these results to mean that, as reading progresses, it becomes more difficult to hold the eye still. Early in reading, the total fixation width is lower, the mean intersample distance is lower, and there are frequent adjustments (i.e., changes in direction) to keep the eye well focused on the target word. As time progresses, it becomes more and more difficult to hold the eye perfectly steady, so fixations become wider and the mean intersample distance increases.
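The epoching itself is straightforward; a sketch with simulated data (the paper computes intersample distances within detected fixations, which this toy example skips):

import numpy as np

# Hypothetical horizontal gaze positions (deg) across one reading session.
x = np.random.default_rng(2).normal(0.0, 0.1, 60000)

# Split the reading period into 10 epochs of equal sample count and compute
# the mean intersample distance within each epoch.
epochs = np.array_split(x, 10)
mean_isd = [np.mean(np.abs(np.diff(e))) for e in epochs]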

17
Recording animal-view videos of the natural world

Vasas, V.; Lowell, M. C.; Villa, J.; Jamison, Q. D.; Siegle, A. G.; Katta, P. K. R.; Bhagavathula, P.; Kevan, P. G.; Fulton, D.; Losin, N.; Kepplinger, D.; Salehian, S.; Forkner, R. E.; Hanley, D.

2022-11-23 animal behavior and cognition 10.1101/2022.11.22.517269 medRxiv
Top 0.1%
7.7%

Plants, animals, and fungi display a rich tapestry of colors. Animals, in particular, use colors in dynamic displays performed in spatially complex environments. In such natural settings, light is reflected or refracted from objects with complex shapes that cast shadows and generate highlights. In addition, the illuminating light changes continuously as viewers and targets move through heterogeneous, continually fluctuating light conditions. Although traditional spectrophotometric approaches for studying colors are objective and repeatable, they fail to document this complexity. Worse, they miss the temporal variation of color signals entirely. Here, we introduce hardware and software that give ecologists and filmmakers the ability to accurately record animal-perceived colors in motion. Specifically, our Python code transforms photos or videos into perceivable units (quantum catches) for any animal of known photoreceptor sensitivity. We provide the plans, code, and validation tests necessary for end users to capture animal-view videos. This approach will allow ecologists to investigate how animals use colors in dynamic behavioral displays, the ways natural illumination alters perceived colors, and other questions that have remained unaddressed until now due to a lack of suitable tools. Finally, our pipeline provides scientists and filmmakers with a new, empirically grounded approach for depicting the perceptual worlds of non-human animals.
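The underlying transformation is a sensitivity-weighted integral: the quantum catch of receptor i is Q_i = ∫ R(λ) I(λ) S_i(λ) dλ over the visible range. A toy Python version, with made-up flat spectra and a Gaussian receptor sensitivity standing in for real calibration data:

import numpy as np

wavelengths = np.arange(300, 701, 10)                 # nm, 10-nm bins
reflectance = np.ones_like(wavelengths, dtype=float)  # hypothetical flat target
illuminant = np.ones_like(wavelengths, dtype=float)   # hypothetical flat light
# Hypothetical photoreceptor sensitivity peaking at 450 nm.
sensitivity = np.exp(-((wavelengths - 450.0) ** 2) / (2 * 40.0 ** 2))

# Quantum catch as a Riemann sum over the 10-nm bins.
q = np.sum(reflectance * illuminant * sensitivity) * 10.0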

18
The Visual Experience Evaluation Tool: A Myopia Research Instrument for Quantifying Visual Experience

Sullivan, D.; Nicholls, A.; Thompson, S.; Schwarzmiller, C.; Hatoun, G.; Memarzanjany, F.; Gunderson, A.; Danielson, A.; Lowes, J.; Petersen, J.; Backes, S.; Rutherford, L.

2024-09-23 neuroscience 10.1101/2024.09.20.614212 medRxiv
Top 0.1%
7.4%

Current myopia research has demonstrated the role of extended visual experience in healthy ocular development. Optical cues and the spectrum, intensity, and temporal characteristics of light landing on the retina are all known factors affecting the development of the eye. However, there is still limited understanding of which of these extrinsic factors are most important or how they interplay with intrinsic physical and neural differences between individuals. Part of the problem is inadequate tooling. Our team at Reality Labs Research created the Visual Environment Evaluation Tool (VEET), a non-commercial research instrument, to accelerate myopia research. In this paper, we describe the VEET's physical design, sensor suite and capabilities, and the associated software that makes it well suited for research on quantified visual experience.

19
Charting the Silent Signals of Social Gaze: Automating Eye Contact Assessment in Face-to-Face Conversations

Schmaelzle, R.; Jahn, N. T.; Bente, G. M.

2024-08-29 animal behavior and cognition 10.1101/2024.08.28.610064 medRxiv
Top 0.1%
7.3%

Social gaze is a crucial yet often overlooked aspect of nonverbal communication. During conversations, it typically operates subconsciously, following automatic co-regulation patterns. However, deviations from typical patterns, such as avoiding eye contact or excessive gazing, can significantly affect social interactions and perceived relationship quality. The principles and effects of social gaze have intrigued researchers across various fields, including communication science, social psychology, animal biology, and psychiatry. Despite its significance, research on social gaze has been limited by methodological challenges in assessing eye movements and gaze direction during natural social interactions. To address these obstacles, we developed a new approach combining mobile eye-tracking technology with automated analysis tools. In this paper, we introduce, validate, and apply a pipeline for recording and analyzing gaze behavior in dyadic conversations. We present a sample study in which dyads engaged in two types of interactions: a get-to-know conversation and a conflictual conversation. Our new analysis pipeline corroborated previous findings, such as people directing more eye gaze while listening than while talking, and gaze typically lasting about three seconds before being averted. These results demonstrate the potential of our methodology to advance the study of social gaze in natural interactions.
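Once each video frame is labelled as gaze-on-face or not, episode durations (such as the roughly three-second gazes reported here) fall out of a run-length encoding; the labels below are simulated and the frame rate is an assumption.

import numpy as np

rng = np.random.default_rng(3)
fps = 50  # assumed eye-tracker video frame rate

# Hypothetical per-frame labels: True when gaze falls on the partner's face.
on_face = rng.random(3000) < 0.6

# Run-length encode the label sequence to get each gaze episode's duration.
change = np.flatnonzero(np.diff(on_face.astype(int)) != 0) + 1
runs = np.split(on_face, change)
durations = [len(r) / fps for r in runs if r[0]]  # seconds, on-face runs only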

20
Multimodal-Multisensory Experiments: Design and Implementation

Razavi, M.; Yamauchi, T.; Janfaza, V.; Leontyev, A.; Longmire-Monford, S.; Orr, J. M.

2020-12-02 neuroscience 10.1101/2020.12.01.405795 medRxiv
Top 0.1%
7.2%

The human mind is multimodal. Yet most behavioral studies rely on century-old measures of behavior - task accuracy and latency (response time). Multimodal and multisensory analysis of human behavior creates a better understanding of how the mind works. The problem is that designing and implementing these experiments is technically complex and costly. This paper introduces versatile and economical means of developing multimodal-multisensory human experiments. We provide an experimental design framework that automatically integrates and synchronizes measures including electroencephalogram (EEG), galvanic skin response (GSR), eye tracking, virtual reality (VR), body movement, mouse/cursor motion, and response time. Unlike proprietary systems (e.g., iMotions), our system is free and open source; it integrates PsychoPy, Unity, and Lab Streaming Layer (LSL). The system embeds LSL inside PsychoPy/Unity to synchronize multiple sensory signals - gaze motion, EEG, GSR, mouse/cursor movement, and body motion - with low-cost consumer-grade devices, in a simple behavioral task designed in PsychoPy and a virtual reality environment designed in Unity. This tutorial shows a step-by-step process by which a complex multimodal-multisensory experiment can be designed and implemented in a few hours. When the experiment is conducted, all data synchronization and recording of the data to disk is done automatically.
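As a flavor of the LSL glue the system builds on, the snippet below declares a marker stream with pylsl and pushes an event; the stream name and source ID are placeholders, and the paper's full PsychoPy/Unity integration is considerably richer.

from pylsl import StreamInfo, StreamOutlet

# Declare an irregular-rate marker stream; LSL timestamps allow offline
# synchronization with EEG, GSR, gaze, and motion streams on the network.
info = StreamInfo(name="task_markers", type="Markers", channel_count=1,
                  nominal_srate=0, channel_format="string",
                  source_id="psychopy01")
outlet = StreamOutlet(info)

# Inside the trial loop: push a marker at stimulus onset.
outlet.push_sample(["stim_onset"])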